Bi-Directional LSTM


Detecting mental disorder on social media: a ChatGPT-augmented explainable approach

Belcastro, Loris, Cantini, Riccardo, Marozzo, Fabrizio, Talia, Domenico, Trunfio, Paolo

arXiv.org Artificial Intelligence

In the digital era, the prevalence of depressive symptoms expressed on social media has raised serious concerns, necessitating advanced methodologies for timely detection. This paper addresses the challenge of interpretable depression detection by proposing a novel methodology that effectively combines Large Language Models (LLMs) with eXplainable Artificial Intelligence (XAI) and conversational agents like ChatGPT. In our methodology, explanations are achieved by integrating BERTweet, a Twitter-specific variant of BERT, into a novel self-explanatory model, namely BERT-XDD, capable of providing both classification and explanations via masked attention. The interpretability is further enhanced using ChatGPT to transform technical explanations into human-readable commentaries. By introducing an effective and modular approach for interpretable depression detection, our methodology can contribute to the development of socially responsible digital platforms, fostering early intervention and support for mental health challenges under the guidance of qualified healthcare professionals.


Piano and Drums Meets AI. Predict African poems with TensorFlow

#artificialintelligence

The late Gabriel Okara was an African poet who wrote many wonderful poems, including the popular "Piano and Drums," which is my favorite. I fell in love with the poem in my high school days, when I studied Literature in English. Having completed the Deep Learning Specialization and TensorFlow Developer certification courses on Coursera, I decided to put my knowledge of Natural Language Processing into practice by applying AI to the things that interest me, African poems being one of them. Data collection is usually among the early stages of any machine learning project, and I got all the data needed for this project from the public web. The data set consists of 14 poems by Okara in about 408 lines; the first 10 lines are the titles of the poems.


Visualize AI: Solve Challenges and Exploit Opportunities - ValueWalk

#artificialintelligence

Every day, new organizations announce how AI is revolutionizing their industry with disruptive results. As more and more business decisions are based on AI and advanced data analytics, it is critical to provide transparency into the inner workings of that technology. According to a recent McKinsey Global Institute analysis, the financial services sector is a leading adopter of AI and has the most ambitious AI investment plans. In a related article in the Harvard Business Review, adoption will center on AI technologies like neural-based machine learning and natural language processing, because those are the technologies that are beginning to mature and prove their value. Below, we explore a challenge and an opportunity unique to the rapid adoption of machine learning.


Understanding the Outputs of Multi-Layer Bi-Directional LSTMs

#artificialintelligence

In the world of machine learning, long short-term memory networks (LSTMs) are a powerful tool for processing sequences of data such as speech, text, and video. The recurrent nature of LSTMs allows them to remember pieces of data that they have seen earlier in the sequence. Conceptually, this is easier to understand in the forward direction (i.e., start to finish), but it can also be useful to consider the sequence in the opposite direction (i.e., finish to start). Consider the sentence "Joe likes blank, especially if they're fried, scrambled, or poached." In the forward direction, the only information available before reaching the missing word is "Joe likes blank", which could have any number of possibilities.
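The intuition above can be sketched with a toy bi-directional pass. This is a minimal illustration, not an LSTM: the hypothetical "cell" below simply accumulates the tokens seen so far, standing in for a hidden state, to show what context each direction has at the position of the blank. All names and logic are illustrative assumptions.

```python
# Toy sketch of bi-directional sequence processing. The "state" at each
# step is just the tuple of tokens seen so far in that direction, which
# mimics how an LSTM hidden state summarizes earlier inputs.

def run_direction(tokens, reverse=False):
    """Return one 'hidden state' per position: the context seen so far."""
    seq = list(reversed(tokens)) if reverse else list(tokens)
    states, seen = [], []
    for tok in seq:
        seen.append(tok)
        states.append(tuple(seen))  # state = everything seen up to here
    if reverse:
        states.reverse()            # re-align with forward positions
    return states

def bidirectional(tokens):
    """Pair the forward and backward states at each position."""
    fwd = run_direction(tokens)
    bwd = run_direction(tokens, reverse=True)
    return list(zip(fwd, bwd))

sentence = ["Joe", "likes", "____", "especially", "fried"]
states = bidirectional(sentence)
fwd_ctx, bwd_ctx = states[2]  # position of the missing word
print(fwd_ctx)  # forward direction has only seen: Joe likes ____
print(bwd_ctx)  # backward direction has already seen: fried, especially
```

Run on the example sentence, the backward state at the blank already contains "fried" and "especially", which is exactly the clue the forward direction lacks; a real bi-directional LSTM concatenates the two hidden states the same way.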


Deep learning for detecting inappropriate content in text

#artificialintelligence

Today, there are a large number of online discussion fora where users express, discuss, and exchange their views and opinions on various topics. In such fora, it has often been observed that user conversations quickly derail and become inappropriate, such as hurling abuse or passing rude and discourteous comments on individuals or certain groups and communities. Similarly, some virtual agents or bots have been found to respond to users with inappropriate messages. As a result, inappropriate messages and comments are turning into an online menace, slowly degrading the quality of user experiences. Hence, automatic detection and filtering of such inappropriate language has become an important problem for improving the quality of conversations with users as well as virtual agents.


DOLORES: Deep Contextualized Knowledge Graph Embeddings

Wang, Haoyu, Kulkarni, Vivek, Wang, William Yang

arXiv.org Artificial Intelligence

We introduce DOLORES, a new method for learning knowledge graph embeddings that effectively captures contextual cues and dependencies among entities and relations. First, we note that short paths on knowledge graphs, comprising chains of entities and relations, can encode valuable information regarding their contextual usage. We operationalize this notion by representing knowledge graphs not as a collection of triples but as a collection of entity-relation chains, and we learn embeddings for entities and relations using deep neural models that capture such contextual usage. In particular, our model is based on Bi-Directional LSTMs and learns deep representations of entities and relations from the constructed entity-relation chains. We show that these representations can easily be incorporated into existing models to significantly advance the state of the art on several knowledge graph prediction tasks, such as link prediction, triple classification, and missing relation type prediction (in some cases by at least 9.5%).
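The key representational move the abstract describes can be sketched as follows: instead of treating the graph as bare (head, relation, tail) triples, walk it to produce alternating entity-relation chains that a Bi-Directional LSTM could then consume. The toy graph, function names, and walk procedure below are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch: turning knowledge-graph triples into
# entity-relation chains via a random walk.
import random

triples = [
    ("paris", "capital_of", "france"),
    ("france", "located_in", "europe"),
    ("europe", "contains", "germany"),
]

# Build an adjacency map: entity -> list of (relation, neighbor)
graph = {}
for h, r, t in triples:
    graph.setdefault(h, []).append((r, t))

def sample_chain(start, num_hops, rng):
    """Random walk producing an alternating entity-relation chain."""
    chain, node = [start], start
    for _ in range(num_hops):
        if node not in graph:  # dead end: no outgoing edges
            break
        rel, nxt = rng.choice(graph[node])
        chain.extend([rel, nxt])
        node = nxt
    return chain

rng = random.Random(0)
chain = sample_chain("paris", 2, rng)
print(chain)
# -> ['paris', 'capital_of', 'france', 'located_in', 'europe']
```

Each chain alternates entities (even positions) and relations (odd positions), so it can be fed to a sequence model exactly like a sentence of tokens; a corpus of such walks replaces the flat triple list as training data.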


How to read: Character level deep learning

#artificialintelligence

Chatbots seem to be extremely popular these days; every other tech company is announcing some form of intelligent language interface. The truth is that language is everywhere: it's the way we communicate and the way we organize our thoughts. Most, if not all, of our culture and knowledge is encoded and stored in some language. One could argue that if we manage to tap into that source of information efficiently, then we are a step closer to creating groundbreaking knowledge systems. Of course, chatbots are not even close to "solving" the language problem; after all, language is as broad as our thoughts.


Good Theano frameworks for implementing Bi-directional LSTM? • /r/MachineLearning

@machinelearnbot

I just need a framework in which I configure the architecture of the network rather than constructing/coding the entire network myself. I found a few, but they seem to lack good community support. Any good ones to recommend?